70 research outputs found
A heuristic explanation of Batcher's Baffler
Abstract: Batcher's Baffler, so named by David Gries, is a sorting algorithm that is of interest because many of its "comparison swaps" can be executed concurrently. It is also of interest because it used to be hard to explain. This note explains Batcher's Baffler by designing it. Besides including all heuristics, it has two distinguishing features, both contributing to its clarity and brevity: (0) the (little) theory the algorithm relies upon is dealt with in isolation; (1) by suitable abstractions, all case analyses have been removed from the argument.
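Assuming the algorithm in question is Batcher's odd-even mergesort (the sorting network usually associated with the name), a minimal sequential sketch of its comparison-swap structure, for power-of-two input lengths, might look like:

```python
def oddeven_merge(x, y):
    # Merge two sorted lists of equal power-of-two length n into one
    # sorted list of length 2n, using Batcher's odd-even merge.
    n = len(x)
    if n == 1:
        return [min(x[0], y[0]), max(x[0], y[0])]
    # Recursively merge the even-indexed and odd-indexed subsequences.
    even = oddeven_merge(x[0::2], y[0::2])
    odd = oddeven_merge(x[1::2], y[1::2])
    # Interleave, then one final round of comparison swaps. In hardware
    # each round's swaps are independent and can run concurrently.
    out = [None] * (2 * n)
    out[0], out[-1] = even[0], odd[-1]
    for i in range(n - 1):
        a, b = odd[i], even[i + 1]
        out[2 * i + 1], out[2 * i + 2] = min(a, b), max(a, b)
    return out

def oddeven_merge_sort(a):
    # Sort a list whose length is a power of two.
    if len(a) <= 1:
        return list(a)
    half = len(a) // 2
    return oddeven_merge(oddeven_merge_sort(a[:half]),
                         oddeven_merge_sort(a[half:]))
```

The recursion mirrors the note's structure: the merge is derived once, in isolation, and the sort is just repeated merging.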
Anisotropic intrinsic lattice thermal conductivity of phosphorene from first principles
Phosphorene, the single layer counterpart of black phosphorus, is a novel
two-dimensional semiconductor with high carrier mobility and a large
fundamental direct band gap, which has attracted tremendous interest recently.
Its potential applications in nano-electronics and thermoelectrics call for a
fundamental study of the phonon transport. Here, we calculate the intrinsic
lattice thermal conductivity of phosphorene by solving the phonon Boltzmann
transport equation (BTE) based on first-principles calculations. The thermal
conductivity of phosphorene differs markedly between the zigzag and
armchair directions, showing an obvious anisotropy. The calculated
thermal conductivity fits the inverse relation with temperature well
when the temperature is higher than the Debye temperature. In
comparison to graphene, the minor contribution of the ZA mode is
responsible for the low thermal conductivity of
phosphorene. In addition, the representative mean free path (MFP), a critical
size for phonon transport, is also obtained.
Comment: 5 pages and 6 figures, Supplemental Material available as
http://www.rsc.org/suppdata/cp/c4/c4cp04858j/c4cp04858j1.pd
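The inverse-temperature fit mentioned in the abstract is the standard high-temperature scaling of lattice thermal conductivity; a sketch, with the prefactor left as an undetermined, direction-dependent constant:

```latex
% Above the Debye temperature, Umklapp phonon-phonon scattering
% dominates and scattering rates grow roughly linearly in T, so
\kappa(T) \approx \frac{A}{T}, \qquad T \gtrsim \Theta_D ,
% where A is a direction-dependent constant (zigzag vs. armchair).
```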
On decomposing a deep neural network into modules
Deep learning is being incorporated into many modern software systems. Deep learning approaches train a deep neural network (DNN) model using training examples and then use the DNN model for prediction. While the layered structure of a DNN model is observable, the model is treated in its entirety as a monolithic component. To change the logic implemented by the model, e.g. to add or remove logic that recognizes inputs belonging to a certain class, or to replace the logic with an alternative, the training examples need to be changed and the DNN needs to be retrained using the new set of examples. We argue that decomposing a DNN into DNN modules, akin to decomposing monolithic software code into modules, can bring the benefits of modularity to deep learning. In this work, we develop a methodology for decomposing DNNs for multi-class problems into DNN modules. For four canonical problems, namely MNIST, EMNIST, FMNIST, and KMNIST, we demonstrate that such decomposition enables reuse of DNN modules to create different DNNs, and enables replacement of one DNN module in a DNN with another without needing to retrain. The DNN models formed by composing DNN modules are at least as good as traditional monolithic DNNs in terms of test accuracy for our problems.
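The composition-and-replacement idea can be illustrated with a toy sketch. The names and scoring functions below are hypothetical stand-ins, not the paper's actual decomposition technique, which operates on trained DNN weights; the sketch only shows how per-class modules compose into a classifier and how one module can be swapped without touching the others:

```python
import math

def make_module(prototype):
    # Toy per-class "module": scores an input by negative Euclidean
    # distance to a class prototype (a stand-in for a real DNN module).
    return lambda x: -math.dist(x, prototype)

def compose(modules):
    # Compose per-class modules into a multi-class classifier that
    # predicts the class whose module scores highest.
    def model(x):
        scores = [m(x) for m in modules]
        return scores.index(max(scores))
    return model

modules = [make_module([0.0, 0.0]), make_module([1.0, 1.0])]
model = compose(modules)

# Replacing the module for class 1 changes its logic without any
# retraining of the module for class 0.
modules[1] = make_module([2.0, 2.0])
```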
Automated Generation of User Guidance by Combining Computation and Deduction
Herewith, a fairly old concept is published for the first time under the
name "Lucas Interpretation". It has been implemented in a prototype,
which has proved useful in educational practice and has gained academic
relevance with an emerging generation of educational mathematics
assistants (EMAs) based on Computer Theorem Proving (CTP).
Automated Theorem Proving (ATP), i.e. deduction, is the most reliable
technology used to check user input. However, ATP is inherently weak at
automatically generating solutions for arbitrary problems in applied
mathematics. This weakness is crucial for EMAs: when ATP flags user
input as incorrect and the learner gets stuck, the system should be able
to suggest possible next steps.
The key idea of Lucas Interpretation is to compute the steps of a calculation
following a program written in a novel CTP-based programming language, i.e.
computation provides the next steps. User guidance is generated by combining
deduction and computation: the latter is performed by a specific language
interpreter, which works like a debugger and hands over control to the learner
at breakpoints, i.e. tactics generating the steps of calculation. The
interpreter also builds up logical contexts providing ATP with the data
required for checking user input, thus combining computation and deduction.
The paper describes the concepts underlying Lucas Interpretation so that open
questions can adequately be addressed, and prerequisites for further work are
provided.
Comment: In Proceedings THedu'11, arXiv:1202.453
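The debugger-like control flow described above can be sketched in a few lines. The names here are illustrative, not the prototype's actual API: a program is a list of named tactics, each rewriting the current term; the interpreter yields control after every step, as a debugger does at a breakpoint, and accumulates the logical context a checker could hand to ATP:

```python
def interpret(tactics, term):
    # Step through a "program" (a list of (name, rewrite) tactics),
    # yielding at each breakpoint so the learner can take over.
    context = []
    for name, rewrite in tactics:
        term = rewrite(term)
        context.append((name, term))  # logical context for ATP checks
        yield name, term, list(context)

# Toy calculation: simplify 2*x + 3*x in two tactic applications.
tactics = [
    ("collect", lambda t: "(2 + 3)*x"),
    ("evaluate", lambda t: "5*x"),
]
steps = list(interpret(tactics, "2*x + 3*x"))
```

Driving the generator one `next()` at a time is exactly the "hand over control at breakpoints" behavior: the caller decides when the next step of the calculation is produced.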
Go To Statement Considered Harmful
This is a digitized copy derived from an ACM copyrighted work. It is not guaranteed to be an accurate copy of the author's original work.

Editor: For a number of years I have been familiar with the observation that the quality of programmers is a decreasing function of the density of go to statements in the programs they produce. More recently I discovered why the use of the go to statement has such disastrous effects, and I became convinced that the go to statement should be abolished from all "higher level" programming languages (i.e. everything except, perhaps, plain machine code). At that time I did not attach too much importance to this discovery; I now submit my considerations for publication because in very recent discussions in which the subject turned up, I have been urged to do so.

My first remark is that, although the programmer's activity ends when he has constructed a correct program, the process taking place under control of his program is the true subject matter of his activity, for it is this process that has to accomplish the desired effect; it is this process that in its dynamic behavior has to satisfy the desired specifications. Yet, once the program has been made, the "making" of the corresponding process is delegated to the machine.
- …